Rajagopal, T. K. P.
- Optimizing the Usage of Bandwidth in Heterogeneous Wireless Networks Using CTSP Mechanism
Authors
1 Kathir College of Engineering, Coimbatore, Tamilnadu, IN
Source
Wireless Communication, Vol 2, No 11 (2010), Pagination: 436-439
Abstract
The main objective of this paper is to reduce total bandwidth consumption and to share unused bandwidth in wireless networks. We use a mechanism that selects the cells and wireless technologies for video multicasting, formulating the cell and technology selection problem (CTSP) as an optimization problem, which we solve with a Lagrangian-based distributed algorithm. We also propose a new hierarchical bandwidth-sharing scheme based on auction theory: in our model, Wi-Fi and UMTS primary networks serve the WLAN APs, sharing their licensed spectrum through a double auction. Simulation results show that the proposed scheme improves the utility of the primary networks' spectrum and provides good service to the WLAN users.
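The double-auction step described above can be illustrated with a minimal uniform-price clearing rule. This is a generic sketch of double-auction clearing, not the paper's exact mechanism; the function name and the midpoint pricing rule are illustrative choices:

```python
def clear_double_auction(bids, asks):
    """Match buyers (bids) with sellers (asks) in a uniform-price
    double auction: trade the largest k units for which the k-th
    highest bid still meets the k-th lowest ask."""
    bids = sorted(bids, reverse=True)   # highest willingness-to-pay first
    asks = sorted(asks)                 # cheapest spectrum offers first
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k == 0:
        return 0, None                  # no feasible trade
    # midpoint of the marginal bid/ask as the single clearing price
    price = (bids[k - 1] + asks[k - 1]) / 2
    return k, price
```

With bids [10, 8, 6, 4] and asks [3, 5, 7, 9], two units trade at price 6.5; the remaining bids fall below the remaining asks, so no further trade is individually rational.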
Keywords
Hierarchical Bandwidth Sharing, Double Auction, Licensed Spectrum, Auction Theory.

- Data Security and Privacy in Cloud Using RC6 and SHA Algorithms
Authors
1 Kathir College of Engineering, CBE, IN
Source
Networking and Communication Engineering, Vol 6, No 5 (2014), Pagination: 202-205
Abstract
In this paper we describe a new scheme for data privacy and security in a cloud storage system. Cryptographic algorithms used in cloud implementations include the Caesar cipher, RSA, the Vigenère cipher, homomorphic encryption, RC block ciphers, and DES, among others. Encryption and decryption are split between client and server: before data is stored on the cloud server, the client encrypts it with RC6 (a block cipher) and computes a SHA-3 (Secure Hash Algorithm) digest, while server-side decryption is gated by an authentication methodology. Data access is categorized by deployment model (public, hybrid, and private cloud): public-cloud users may view open-access documents but not download them; hybrid users can access a document with an authentication code issued by the cloud server; and private users can access all types of secured data after authenticating from the client system or via an email verification code. The authentication process is designed to prevent intruders and malicious attackers, both inside and outside.
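The client-side encrypt-then-upload flow can be sketched as follows. SHA-3 is available in Python's `hashlib`; RC6 is not in the standard library, so a SHA-3-derived XOR keystream stands in for it here purely to model the flow — it is not RC6 and not secure:

```python
import hashlib

def xor_keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Placeholder cipher standing in for RC6: a SHA-3-derived XOR
    keystream.  NOT RC6 and NOT secure; it only models the
    encrypt-before-upload step.  XOR is its own inverse, so the
    same call also decrypts."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        block = hashlib.sha3_256(key + counter.to_bytes(8, "big")).digest()
        stream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def client_side_protect(key: bytes, document: bytes):
    """Encrypt on the client, then attach a SHA-3 integrity digest;
    the (ciphertext, digest) pair is what would be stored on the
    cloud server."""
    ciphertext = xor_keystream_encrypt(key, document)
    return ciphertext, hashlib.sha3_256(ciphertext).hexdigest()
```

The server can recompute the digest over the stored ciphertext to detect tampering without ever seeing the plaintext; only an authenticated client holding the key can decrypt.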
Keywords
Cloud Computing, RC6 Algorithm, SHA3 Algorithm, Cryptography, Encryption, Decryption, Security.

- A Phenomenal Measurement Used for Congregation of Informations
Authors
1 Kathir College of Engineering, Coimbatore, Tamilnadu, IN
Source
Data Mining and Knowledge Engineering, Vol 6, No 6 (2014), Pagination: 269-272
Abstract
Data mining technology identifies and extracts patterns from large volumes of data, and is mainly used to discover unknown patterns in real-time, financial, and business applications. Conventional clustering algorithms group data sets, and many algorithms address this problem; fuzzy clustering methods in particular can handle such situations efficiently. In this paper we propose a k-means fuzzy clustering method that handles outlier points more effectively and serves as a useful means of identifying patterns and trends in large volumes of data. Clustering is a computational-intelligence discipline that has emerged as a valuable tool for data analysis, knowledge discovery, and decision making. Unprocessed, unlabeled data from a large dataset can first be categorized in an unsupervised fashion by cluster analysis, i.e., the assignment of a set of observations into clusters so that observations in the same cluster may in some sense be treated as similar. The result of the clustering process, and its efficiency in a given application domain, are largely determined by the algorithm used.
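The kind of fuzzy clustering described above can be sketched with a minimal pure-Python fuzzy c-means on 1-D data: each point gets a soft membership in every cluster, so an outlier pulls centroids less than in hard k-means. The deterministic initialization and the fuzzifier m = 2 are generic choices, not the authors' exact method:

```python
def fuzzy_c_means(points, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on a list of 1-D points.
    Returns the final centers and the membership matrix u,
    where u[i][j] is point i's degree of membership in cluster j."""
    # deterministic init: spread centers evenly across the data range
    lo, hi = min(points), max(points)
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)] if c > 1 else [lo]
    for _ in range(iters):
        # membership update: u_ij proportional to d_ij^(-2/(m-1))
        u = []
        for x in points:
            d = [abs(x - cj) or 1e-9 for cj in centers]  # avoid divide-by-zero
            row = [1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                   for j in range(c)]
            u.append(row)
        # centroid update: mean weighted by memberships raised to m
        centers = [sum(u[i][j] ** m * points[i] for i in range(len(points))) /
                   sum(u[i][j] ** m for i in range(len(points)))
                   for j in range(c)]
    return centers, u
```

On two well-separated groups such as [1.0, 1.1, 0.9, 10.0, 10.2, 9.8], the centers converge close to the group means, and each membership row sums to 1 by construction.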
Keywords
Clustering, Outlier Points, Knowledge Discovery, K-Means, Time Complexity.

- An Evaluation under Concealment of Duplication Entities in XML Documents
Authors
1 Kathir College of Engineering, Coimbatore, Tamilnadu, IN
Source
Artificial Intelligent Systems and Machine Learning, Vol 6, No 6 (2014), Pagination: 224-228
Abstract
Detecting duplicates is a significant part of data cleaning: the task is to recognize multiple representations of the same real-world or business data, which is necessary to improve data quality. A number of approaches exist for both relational and XML data. As XML is widely used for data exchange and data publishing on the Web, algorithms that detect duplicates in XML documents are required; because data published in XML is prone to error and noise, it must be cleaned, which calls for solutions to fuzzy duplicate detection in XML. The hierarchical and semi-structured nature of XML differs strongly from the flat, structured relational model, which has received most of the attention in duplicate detection so far. We consider the challenges of detecting duplicates in XML in order to develop effective, well-organized solutions to the problem, and we present a comparison of algorithms that perform duplicate detection for all kinds of XML objects, given dependencies between different XML elements.
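A simple baseline for the task compared above — pairwise fuzzy matching of flattened XML records — can be sketched with the standard library. The record tag, the 0.85 threshold, and the `SequenceMatcher` similarity are illustrative choices, not any specific published algorithm:

```python
import xml.etree.ElementTree as ET
from difflib import SequenceMatcher

def flatten(elem):
    """Concatenate tag names and text of an XML subtree into one
    comparable string, walking the hierarchy depth-first."""
    parts = [elem.tag, (elem.text or "").strip()]
    for child in elem:
        parts.append(flatten(child))
    return " ".join(p for p in parts if p)

def find_duplicates(xml_doc, record_tag, threshold=0.85):
    """Flag pairs of records whose flattened strings are similar
    enough to be fuzzy duplicates.  O(n^2) pairwise comparison;
    real systems prune candidate pairs first."""
    records = ET.fromstring(xml_doc).findall(record_tag)
    flat = [flatten(r) for r in records]
    pairs = []
    for i in range(len(flat)):
        for j in range(i + 1, len(flat)):
            if SequenceMatcher(None, flat[i], flat[j]).ratio() >= threshold:
                pairs.append((i, j))
    return pairs
```

For example, two `person` records differing only in the spelling "John"/"Jon" are flagged as duplicates, while a record with a different name and city is not.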